In droid we trust – trust in friendly robots in Star Wars

The writer is a doctoral researcher at Tampere University, Technology x Social Interaction research group.

Warning: contains Star Wars spoilers!

Thanks to my BF, I recently watched all the Star Wars films in just a few weeks. I couldn’t help but notice the remarkable relationship between humans and robots (droids, to be specific) in the galactic universe. What an ideal example of seamless human-computer interaction! Since my Ph.D. research focuses on trust in technology, I got interested in examining how trust in technology manifests in human-droid interaction, focusing on the friendly droids: R2-D2, C-3PO and BB-8 — yes, I will be staying on the light side of the Force.

What is trust in technology, actually?

Trust is a double-edged sword. Even though it plays a big role in our everyday interactions (and life in general), it is not easy to define. Research on both trust in technology and trust in humans commonly draws on the definition by Mayer, Davis, and Schoorman (1995), where trust is defined as “the willingness to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor irrespective of the ability to monitor or control the other party”. In Star Wars, the droids are seen as individuals: they have autonomy, although they might be committed to following certain characters. They act for the common good when needed, without waiting for specific commands. Luckily so — more than once or twice, droids have saved the whole galaxy from total destruction.

Celebrating the Rebel victory at Endor — all together!

Vulnerability, expectations, lack of control — these are just some of the keywords used to conceptualize trust. But the most significant element of trust is interaction: trust forms in relation to ‘the other’. We form trusting beliefs about the person’s or technology’s necessary attributes to perform as expected. These external variables, expectations, define what kind of trust relationship we build. In human-droid interaction, droids are expected to function correctly when needed, and with their versatile technological capabilities, droids usually know exactly what to do in a given situation.

Thus, we can expect technology to function properly, or our partner to be faithful. Still, despite our expectations, the other party might fail us — and we’re totally aware of that. Trusting beliefs are thus followed by trusting intentions: while we cannot control the other party’s behavior, we are willing to take a risk: to become vulnerable and depend upon the other party.

In Star Wars, trust in technology and droids is remarkably solid — hardly anyone questions the droids’ loyalty or functionality. The willingness to depend upon the other (the trustee) is the action manifestation of trusting behavior. This becomes visible, for example, in A New Hope, when Leia pleads for the help of Obi-Wan Kenobi using R2-D2 as a messenger. Even though this task does not go as planned (the droids are captured and sold to Luke’s uncle — the rest is history), trust toward these little fellas is not diminished.

Besides, droids have always been around. Droids are trusted because they are already familiar and have proven trustworthy: a situation where the trustor knows the other party well enough to predict its behavior is defined as knowledge-based trust (Lewicki & Bunker, 1996; McKnight et al., 2011). The more we use certain technologies, the more easily we form trust in them. This also applies to the human-robot trust relationship: the amount of experience users have interacting with a robot may influence trust formation (Koay et al., 2007).

“Never underestimate a droid” — Leia

Trust is dynamic — and individual. While trust in droids is mostly solid, there are a few exceptions. Rey, for instance, seems quite skeptical when BB-8 enters her life in The Force Awakens. As a relatively poor scavenger, she might not be used to droids. Trust in an unfamiliar trustee, about whom we have no credible information, is called initial trust (Bigley & Pearce, 1998). To make trusting decisions in this kind of situation, we reflect on our social environment: for example, by asking our friends’ opinions. Even though Rey does not directly ask about the trustworthiness of droids, Leia gently encourages her by reminding her that she should “never underestimate a droid”.

Wherever you go, droids will have your back.

Even when completely new droids appear, trust in droids doesn’t falter. Not once in the whole galactic universe does anyone worry about the safety or privacy implications of droids. D-O is a good example of this blind trust. This small, submissive hairdryer-lookalike was once owned by the Sith assassin Ochi of Bestoon. In The Rise of Skywalker, D-O plays an important role in delivering information about Exegol, the hidden world of the Sith. Do you remember anyone questioning the truthfulness of this information? Yep, no one did. Another interesting aspect of droid-human interaction is the deployment of the (droid) technology: although droids might act as sidekicks of certain characters, the formation of this relationship is very intuitive. Basically, it just happens. For example, D-O accepted its role under a new “ownership” without any kind of authentication. And this goes both ways.

The question of right and wrong — volition and moral agency in the trust literature — is probably the biggest difference between trust in humans and trust in technology. While humans can make moral decisions, technology can only be expected to do what it is programmed to do. Thus, IT-related trust reflects beliefs about a technology’s characteristics rather than its will or motives (see e.g. McKnight et al., 2011). While it sometimes seems otherwise, droids do not have cognitive capabilities. This is pointed out by Obi-Wan Kenobi in Attack of the Clones: “Well, if droids could think, there’d be none of us here, would there?” Somehow this sounds a bit familiar when it comes to certain concerns about AI.

A technology’s “humanness” influences trust

Ever felt sad when your favorite droid was heading toward its apparent demise? Been there. We feel emotionally attached to droids because they seem so human-like. Like social robots, droids display social cues: they interact through real language or droidspeak, physical expressions, color, and movement. Besides, they express emotions. All of this influences trust in robots (Tapus et al., 2007; Martelaro et al., 2016).

The design of interaction might affect trust toward a particular technology (McKnight, 2005; Lankton et al., 2015). Since droids represent human-like technology, trust in droids is based on beliefs about the trustee’s competence, benevolence, and integrity (compared to a technology’s helpfulness, reliability, and functionality; see e.g. McKnight et al., 2011; Lankton et al., 2015). This means droids are seen as competent in their actions: they have skills that help them succeed in a specific domain. Integrity refers to predictability: the trustee behaves according to the trustor’s perception. Benevolence is the extent to which the trustee is believed to want to do good for, or help, the trustor (McKnight et al., 2011; Mayer et al., 1995).

Fun fact: vulnerability, in particular, has been shown to improve likability and influence trust in robots (Siino et al., 2008). This probably has something to do with our fear of omnipotent technology. Anyhow, it might explain why we love our Star Wars droids so much. Droids do make errors. Just like us!

“Don’t worry. We have R2 with us.”– Anakin

The quote above reveals a lot about the droids’ role in Star Wars: as long as droids are around, all is good. Droids are seen as loyal partners who fearlessly follow their human counterparts to the darkest corners of the galactic universe (though C-3PO expresses some concerns from time to time). For a human-robot team to accomplish its goal, humans must trust that their robotic teammate will protect the interests and welfare of every other individual on the team (Hancock et al., 2011). And this is obvious in Star Wars.

Even though the droids’ technical abilities are useful, even lifesaving, they don’t exist in the Star Wars universe only because of their skills. For example, when Padmé and Anakin hold their secret wedding ceremony on Naboo, who is there to witness? R2-D2 and C-3PO. They are not there to protect the couple; they are there as friends. And a true friend you can trust — always.

R2-D2 and C-3PO honoring the secret wedding ceremony of Padmé and Anakin — exclusively.

Photos: Wookieepedia, the Star Wars Wiki

Sources:

Bigley, G. A., Pearce, J. L. 1998. Straining for shared meaning in organization science: Problems of trust and distrust. The Academy of Management Review, 23(3), 405–421.

Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., Parasuraman, R. 2011. A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction. Human Factors, 53(5), 517–527.

Harwood, T., Garry, T. 2017. Internet of Things: understanding trust in techno-service systems. Journal of Service Management, 28(3), 442–475.

Koay, K. L., Syrdal, D. S., Walters, M. L., Dautenhahn, K. 2007. Living with robots: Investigating the habituation effect in participants’ preferences during a longitudinal human-robot interaction study. In RO-MAN 2007: The 16th IEEE International Symposium on Robot and Human Interactive Communication, 564–569. IEEE.

Lewicki, R.J., Bunker, B.B. 1996. Developing and maintaining trust in work relationships. In Trust in Organizations: Frontiers of Theory and Research, R. Kramer and T. Tyler Eds., Sage Publications, Thousand Oaks, CA, 114–139.

Martelaro, N., Nneji, V. C., Ju, W., Hinds, P. 2016. Tell me more: Designing HRI to encourage more trust, disclosure, and companionship. In Proceedings of HRI ’16, the Eleventh ACM/IEEE International Conference on Human-Robot Interaction, 181–188. IEEE Press.

McKnight, D. H. 2005. Trust in information technology. In The Blackwell Encyclopedia of Management, Vol. 7: Management Information Systems, G. B. Davis, Ed., Blackwell, Malden, MA, 329–331.

McKnight, D. H., Carter, M., Thatcher, J. B., Clay, P. F. 2011. Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems, 2(2).

Mayer, R. C., Davis, J. H., Schoorman, F. D. 1995. An Integrative Model of Organizational Trust. Academy of Management Review, 20(3), 709–734.

Salem, M., Lakatos, G., Amirabdollahian, F., Dautenhahn, K. 2015. Would You Trust a (Faulty) Robot? Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust. In Proceedings of HRI ’15, 141–148.

Siino, R. M., Chung, J., Hinds, P. J. 2008. Colleague vs. tool: Effects of disclosure in human-robot collaboration. In RO-MAN ’08, 558–562.

Tapus, A., Matarić, M. J., Scassellati, B. 2007. Socially assistive robotics [Grand Challenges of Robotics]. IEEE Robotics & Automation Magazine, 14(1), 35–42.
